
    A posteriori analysis of discontinuous Galerkin schemes for systems of hyperbolic conservation laws

    In this work we construct reliable a posteriori estimates for some spatially semi-discrete discontinuous Galerkin schemes applied to nonlinear systems of hyperbolic conservation laws. We make use of appropriate reconstructions of the discrete solution together with the relative entropy stability framework, which leads to error control in the case of smooth solutions. The methodology we use is quite general and allows for a posteriori control of discontinuous Galerkin schemes with standard flux choices which appear in the approximation of conservation laws. In addition to the analysis, we conduct some numerical benchmarking to test the robustness of the resulting estimator.
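    For orientation, the relative entropy mentioned above is conventionally defined as sketched below; the notation is a standard textbook convention and not necessarily the paper's.

```latex
% Relative entropy for an entropy pair (\eta, q) of the system u_t + f(u)_x = 0;
% here v plays the role of the reconstructed discrete solution.
\[
  \eta(u \mid v) \;=\; \eta(u) - \eta(v) - \mathrm{D}\eta(v)\,(u - v).
\]
% For strictly convex \eta this quantity is comparable to |u - v|^2 on bounded
% sets, which is why it yields error control for smooth solutions.
```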

    A posteriori error control for fully discrete Crank–Nicolson schemes

    We derive residual-based a posteriori error estimates of optimal order for fully discrete approximations of linear parabolic problems. The time discretization uses the Crank–Nicolson method, and the space discretization uses finite element spaces that are allowed to change in time. The main tool in our analysis is the comparison with an appropriate reconstruction of the discrete solution, which is introduced in the present paper.
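    For reference, the standard Crank–Nicolson step for an abstract parabolic problem u'(t) + Au(t) = f(t) is sketched below; the notation is illustrative and not taken from the paper.

```latex
% One Crank-Nicolson step on (t^{n-1}, t^n] with step size k_n = t^n - t^{n-1}:
\[
  \frac{U^n - U^{n-1}}{k_n} + A\,\frac{U^n + U^{n-1}}{2} = f\bigl(t^{n-1/2}\bigr),
  \qquad t^{n-1/2} = \tfrac{1}{2}\bigl(t^{n-1} + t^n\bigr).
\]
% The a posteriori analysis compares the sequence (U^n) with a suitable
% reconstruction in time rather than with the exact solution directly.
```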

    A new accuracy measure based on bounded relative error for time series forecasting

    Many accuracy measures have been proposed in the past for time series forecasting comparisons. However, many of these measures suffer from one or more issues, such as poor resistance to outliers and scale dependence. In this paper, while summarising commonly used accuracy measures, a special review is made of the symmetric mean absolute percentage error. Moreover, a new accuracy measure called the Unscaled Mean Bounded Relative Absolute Error (UMBRAE), which combines the best features of various alternative measures, is proposed to address the common issues of existing measures. A comparative evaluation of the proposed and related measures has been made with both synthetic and real-world data. The results indicate that the proposed measure, with a user-selectable benchmark, performs as well as or better than other measures on the selected criteria. Though it has been commonly accepted that there is no single best accuracy measure, we suggest that UMBRAE could be a good choice for evaluating forecasting methods, especially in cases where measures based on the geometric mean of relative errors, such as the geometric mean relative absolute error, are preferred.
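    A minimal sketch of how a bounded-relative-error measure of this kind can be computed is given below. It assumes the construction suggested by the abstract (absolute errors bounded relative to a user-selectable benchmark forecast, averaged, then unscaled); the exact definition of UMBRAE should be taken from the paper itself.

```python
import numpy as np

def umbrae(actual, forecast, benchmark):
    """Sketch of an unscaled mean bounded relative absolute error.

    Assumes errors are bounded against a benchmark forecast and then
    unscaled; consult the paper for the authoritative definition.
    """
    actual, forecast, benchmark = (np.asarray(a, dtype=float)
                                   for a in (actual, forecast, benchmark))
    e = np.abs(actual - forecast)        # candidate forecast errors
    e_star = np.abs(actual - benchmark)  # benchmark errors (user selectable)
    denom = e + e_star
    # Bounded relative error in [0, 1]; 0.5 when both errors are zero.
    t = np.divide(e, denom, out=np.full_like(e, 0.5), where=denom > 0)
    mbrae = t.mean()                     # mean bounded relative absolute error
    return mbrae / (1.0 - mbrae)         # unscale back to a relative-error scale

# Example with a naive (random-walk) benchmark; all numbers are hypothetical.
y = np.array([10.0, 12.0, 13.0, 12.5, 14.0])
yhat = np.array([10.5, 11.5, 13.5, 12.0, 14.5])
naive = np.array([9.5, 10.0, 12.0, 13.0, 12.5])
print(umbrae(y, yhat, naive))
```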

    Extrapolation for Time-Series and Cross-Sectional Data

    Extrapolation methods are reliable, objective, inexpensive, quick, and easily automated. As a result, they are widely used, especially for inventory and production forecasts, for operational planning for up to two years ahead, and for long-term forecasts in some situations, such as population forecasting. This paper provides principles for selecting and preparing data, making seasonal adjustments, extrapolating, assessing uncertainty, and identifying when to use extrapolation. The principles are based on received wisdom (i.e., experts’ commonly held opinions) and on empirical studies. Some of the more important principles are:
    • In selecting and preparing data, use all relevant data and adjust the data for important events that occurred in the past.
    • Make seasonal adjustments only when seasonal effects are expected and only if there is good evidence by which to measure them.
    • In extrapolating, use simple functional forms. Weight the most recent data heavily if there are small measurement errors, stable series, and short forecast horizons (a minimal smoothing sketch follows this list). Domain knowledge and forecasting expertise can help to select effective extrapolation procedures. When there is uncertainty, be conservative in forecasting trends. Update extrapolation models as new data are received.
    • To assess uncertainty, make empirical estimates to establish prediction intervals.
    • Use pure extrapolation when many forecasts are required, little is known about the situation, the situation is stable, and expert forecasts might be biased.
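    The smoothing sketch referenced in the list above illustrates the principle of weighting recent data heavily with a simple functional form; it is an illustration of the principle, not a method prescribed by the paper, and the series is hypothetical.

```python
def simple_exponential_smoothing(series, alpha=0.3):
    """Weight recent observations more heavily; 0 < alpha <= 1.

    Larger alpha puts more weight on the most recent data, which the
    principles above recommend for stable series with small measurement
    errors and short forecast horizons.
    """
    level = series[0]
    for y in series[1:]:
        level = alpha * y + (1 - alpha) * level  # exponentially decaying weights
    return level  # one-step-ahead extrapolation (flat forecast)

# Example: monthly demand series (hypothetical numbers)
demand = [102, 98, 105, 110, 108, 112]
print(simple_exponential_smoothing(demand, alpha=0.5))
```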

    Forecasting the price of gold

    This article seeks to evaluate the appropriateness of a variety of existing forecasting techniques (17 methods) at providing accurate and statistically significant forecasts for the gold price. We report the results from the nine most competitive techniques. Special consideration is given to the ability of these techniques to provide forecasts which outperform the random walk (RW), as we noticed that certain multivariate models (which included prices of silver, platinum, palladium and rhodium, besides gold) were also unable to outperform the RW in this case. Interestingly, the results show that none of the forecasting techniques is able to outperform the RW at horizons of 1 and 9 steps ahead, and on average, the exponential smoothing model is seen to provide the best forecasts in terms of the lowest root mean squared error over the 24-month forecasting horizons. Moreover, we find that the univariate models used in this article are able to outperform the Bayesian autoregressive and Bayesian vector autoregressive models, with exponential smoothing reporting statistically significant results in comparison with the former models, as well as with the classical autoregressive and vector autoregressive models in most cases.
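    As a small illustration of the benchmarking described above (not the authors' code), the random-walk benchmark simply carries the last observed price forward, and competing forecasts are compared by root mean squared error; all numbers below are hypothetical.

```python
import numpy as np

def rmse(actual, forecast):
    """Root mean squared error between observed and forecast values."""
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    return float(np.sqrt(np.mean((actual - forecast) ** 2)))

def random_walk_forecast(history, horizon):
    """Random-walk (no-change) benchmark: repeat the last observed price."""
    return np.full(horizon, history[-1], dtype=float)

# Hypothetical monthly gold prices (USD/oz) and a 3-step evaluation window
history = [1820.0, 1835.0, 1810.0, 1842.0]
actual_future = [1850.0, 1861.0, 1855.0]
rw = random_walk_forecast(history, horizon=3)
print("RW RMSE:", rmse(actual_future, rw))
```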

    Artificial Intelligence in Swedish Policies: Values, benefits, considerations and risks

    Artificial intelligence (AI) is said to be the next big phase in digitalization. There is an ongoing global race to develop, implement and make use of AI in both the private and the public sector. The many responsibilities of governments in this race are complicated and cut across a number of areas. Therefore, it is important that the use of AI supports these diverse aspects of governmental commitments and values. The aim of this paper is to analyze how AI is portrayed in Swedish policy documents and what values are attributed to the use of AI. We analyze Swedish policy documents and map the benefits, considerations and risks of AI onto different value ideals, based on an established e-government value framework. We conclude that there is a discrepancy between different value ideals in the policy-level discourse on the use of AI. Our findings show that AI is strongly associated with improving efficiency and service quality, in line with previous e-government policy studies. Interestingly, few benefits are highlighted concerning the engagement of citizens in policy making. A more nuanced view of AI is needed to create realistic expectations of how this technology can benefit society.

    Variational Multiscale Stabilization and the Exponential Decay of Fine-scale Correctors

    This paper addresses the variational multiscale stabilization of standard finite element methods for linear partial differential equations that exhibit multiscale features. The stabilization is of Petrov-Galerkin type with a standard finite element trial space and a problem-dependent test space based on pre-computed fine-scale correctors. The exponential decay of these correctors and their localisation to local cell problems is rigorously justified. The stabilization eliminates scale-dependent pre-asymptotic effects as they appear for standard finite element discretizations of highly oscillatory problems, e.g., the poor L^2 approximation in homogenization problems or the pollution effect in high-frequency acoustic scattering.
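    The Petrov-Galerkin structure described above can be sketched as follows in a simplified symmetric model setting; the notation is illustrative and may differ from the paper's.

```latex
% Simplified sketch (symmetric model case). Split V = V_H \oplus W into a
% coarse finite element space V_H and a fine-scale remainder space W.
% The fine-scale corrector C : V_H \to W solves
\[
  a(C v_H, w) = a(v_H, w) \qquad \text{for all } w \in W,
\]
% and the stabilized method seeks u_H \in V_H (standard trial space) such that
\[
  a\bigl(u_H, (1 - C) v_H\bigr) = F\bigl((1 - C) v_H\bigr)
  \qquad \text{for all } v_H \in V_H.
\]
% The point of the exponential decay result is that C v_H decays rapidly away
% from the support of v_H, so the corrector problems can be localized to
% small patches of cells and pre-computed.
```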

    Forecasting Player Behavioral Data and Simulating in-Game Events

    Understanding player behavior is fundamental in game data science. Video games evolve as players interact with the game, so being able to foresee the player experience would help to ensure successful game development. In particular, game developers need to evaluate beforehand the impact of in-game events. Simulation optimization of these events is crucial to increase player engagement and maximize monetization. We present an experimental analysis of several methods to forecast game-related variables, with two main aims: to obtain accurate predictions of in-app purchases and playtime in an operational production environment, and to perform simulations of in-game events in order to maximize sales and playtime. Our ultimate purpose is to take a step towards the data-driven development of games. The results suggest that, even though the performance of traditional approaches such as ARIMA is still better, the outcomes of state-of-the-art techniques like deep learning are promising. Deep learning emerges as a well-suited general model that could be used to forecast a variety of time series with different dynamic behaviors.
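    The ARIMA baseline mentioned above can be illustrated with a minimal sketch like the one below (not the authors' pipeline; the series, its name, and the model order are hypothetical).

```python
# Minimal ARIMA forecasting sketch using statsmodels.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

# Hypothetical daily playtime totals (hours) for one player segment
playtime = np.array([5.1, 5.3, 4.8, 5.6, 6.0, 5.9, 6.2, 6.5, 6.1, 6.8,
                     7.0, 6.9, 7.3, 7.1, 7.6, 7.4, 7.8, 8.0, 7.9, 8.3])

model = ARIMA(playtime, order=(1, 1, 1))  # (p, d, q) chosen for illustration only
fitted = model.fit()
forecast = fitted.forecast(steps=7)       # one-week-ahead forecast
print(forecast)
```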

    Global Warming: Forecasts by Scientists versus Scientific Forecasts

    In 2007, the Intergovernmental Panel on Climate Change’s Working Group One, a panel of experts established by the World Meteorological Organization and the United Nations Environment Programme, issued its Fourth Assessment Report. The Report included predictions of dramatic increases in average world temperatures over the next 92 years and of serious harm resulting from the predicted temperature increases. Using forecasting principles as our guide we asked: Are these forecasts a good basis for developing public policy? Our answer is “no”. To provide forecasts of climate change that are useful for policy-making, one would need to forecast (1) global temperature, (2) the effects of any temperature changes, and (3) the effects of feasible alternative policies. Proper forecasts of all three are necessary for rational policy making. The IPCC WG1 Report was regarded as providing the most credible long-term forecasts of global average temperatures by 31 of the 51 scientists and others involved in forecasting climate change who responded to our survey. We found no references in the 1056-page Report to the primary sources of information on forecasting methods, despite the fact that these are conveniently available in books, articles, and websites. We audited the forecasting processes described in Chapter 8 of the IPCC’s WG1 Report to assess the extent to which they complied with forecasting principles. We found enough information to make judgments on 89 out of a total of 140 forecasting principles. The forecasting procedures that were described violated 72 principles. Many of the violations were, by themselves, critical. The forecasts in the Report were not the outcome of scientific procedures. In effect, they were the opinions of scientists transformed by mathematics and obscured by complex writing. Research on forecasting has shown that experts’ predictions are not useful in situations involving uncertainty and complexity. We have been unable to identify any scientific forecasts of global warming. Claims that the Earth will get warmer have no more credence than saying that it will get colder.